The data centers at the heart of the AI boom are producing so much heat that they're spiking land temperatures for miles around them by up to 16 degrees Fahrenheit, new research suggests. The effect is so pronounced that the researchers say they're creating entire "heat islands":
The findings, detailed in a study that has yet to be peer-reviewed, add to an already grim picture of the environmental impact of these sprawling facilities, the largest of which consume enough energy to power entire cities. Their commensurate greenhouse gas emissions, however, apparently aren't the only way data centers are heating up the world around them.
The researchers focused on roughly 8,400 so-called "hyperscalers," the term used to describe data centers of incredible size that offer cloud computing and AI services. Their construction has surged in the past decade, and the AI boom has pushed their demand and scope to new heights; Meta's new "Hyperion" data center, for example, cost $27 billion to build and has an expected computing capacity of five gigawatts, an appetite that takes ten gas-powered plants to sate.
[...] The effects were local, but far-reaching. The researchers found that the temperature increases were felt up to 6.2 miles away — though they dropped off with distance — affecting, in all, more than 340 million people. CNN's coverage notes that the trend held globally: Mexico's burgeoning data center hub in Bajío saw an uptick of around 3.6 degrees over the past 20 years, as did Aragon, Spain, itself a hot new hub for hyperscalers.
Link to Study: The data heat island effect: quantifying the impact of AI data centers in a warming world
https://go.theregister.com/feed/www.theregister.com/2026/03/27/security_boffins_harvest_bumper_crop/
Computer security boffins have conducted an analysis of 10 million websites and found almost 2,000 API credentials strewn across 10,000 webpages.
The researchers detail their findings in a preprint paper titled "Keys on Doormats: Exposed API Credentials on the Web," and say they conducted the study because much of the attention on exposed credentials has focused on scouring code repositories and source code. They argue that dynamic analysis of production websites is essential to understand the scope of the problem.
"What we found were highly sensitive API credentials left publicly exposed on public webpages," Nurullah Demir, a PhD candidate at Stanford and corresponding author, told The Register in an email. "These act as access tokens that authorize applications to interact with third-party services, granting direct access to critical infrastructure like cloud platforms and payment providers."
Demir contends that API credentials are even more dangerous than exposed login details because they provide programmatic access to resources.
The researchers scanned approximately 10 million websites using a tool called TruffleHog, and found 1,748 valid credentials belonging to organizations including multinational corporations, critical infrastructure entities, and government agencies. The keys provide access to services like AWS, GitHub, Stripe, and OpenAI.
Demir said one of the affected organizations was a global bank. Another makes firmware for electronic devices.
"A 'Global Systemically Important Financial Institution' exposed its cloud credentials directly on its webpages," said Demir. "This gave direct access to multiple core cloud infrastructure services, including databases and key management systems."
The researchers also found repository credentials for a developer responsible for firmware used by various manufacturers of drones and remote-controlled devices. Attackers could use those credentials to modify source code and push malicious firmware updates to various devices, Demir said.
"Exposure is widespread across service categories, with cloud services (e.g., AWS, Cloudflare) and payment services (e.g., Stripe, Razorpay) accounting for the majority of verified credentials," the paper explains. "AWS credentials alone represent more than 16 percent of all verified exposures and were found on over 4,693 websites. Email and communication services such as SendGrid and Twilio also appear frequently, with a significant portion of their exposures originating from embedded third-party resources."
Most of the credentials the researchers found were present in JavaScript resources (84 percent), followed by HTML (eight percent) and JSON (seven percent) files. They also turned up unusual cases like a verified GitHub access token embedded in a CSS file.
In JavaScript files, 62 percent of credential exposures show up in bundles created by build tools like Webpack.
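TruffleHog's detector set is large, but the underlying idea is straightforward: fetch each page's resources (HTML, JavaScript bundles, JSON) and pattern-match for known key formats. Below is a minimal sketch in Python, not TruffleHog's actual code, using a few credential prefixes that are publicly documented (AWS access key IDs begin with AKIA, classic GitHub personal access tokens with ghp_, Stripe live secret keys with sk_live_). The target URL is a placeholder:

    import re
    import urllib.request

    # Well-known credential prefixes. Real scanners such as TruffleHog use
    # hundreds of detectors plus live verification against each provider.
    PATTERNS = {
        "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "GitHub personal access token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
        "Stripe live secret key": re.compile(r"\bsk_live_[A-Za-z0-9]{24,}\b"),
    }

    def scan_url(url):
        """Fetch one resource (HTML, JS bundle, JSON) and report pattern hits."""
        body = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
        hits = []
        for label, pattern in PATTERNS.items():
            for match in pattern.findall(body):
                hits.append((label, match))
        return hits

    # Hypothetical target: a site's bundled JavaScript.
    for label, token in scan_url("https://example.com/static/main.js"):
        print("possible %s: %s..." % (label, token[:12]))

Bundlers are a common leak path: a developer references a server-side secret through an environment variable, the build tool inlines the literal value into the shipped bundle, and the 'private' key becomes a public static asset. Verification, which the paper performs for 14 providers, then means exercising each candidate key against its provider's API; that step is what separates a live exposure from a dead string.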
Demir said he and his co-authors – Yash Vekaria of UC Davis, Georgios Smaragdakis from TU Delft/Stanford, and Zakir Durumeric from Stanford – made a significant effort to contact affected organizations. The number of exposed credentials declined by half in about two weeks after the researchers started to report their findings.
"When we got feedback from the developers, we saw that a significant number of them were completely unaware of the exposures," he explained. "What is perhaps most concerning is that our historical analysis showed these credentials often remain exposed for an average of 12 months, in some cases for years."
Demir said that he and his co-authors only verified credentials for 14 different service providers, so the exposure figure represents a lower bound.
"We strongly believe that the actual number of exposed credentials across the web is much higher than what we captured in this study," he said. ®
https://linuxiac.com/vitruvianos-0-3-debuts-as-haiku-inspired-linux-os/
VitruvianOS 0.3 has been released as the project’s first publicly available version, described by its developers as a pilot build. It is based on the Linux kernel and adopts a design inspired by Haiku OS and BeOS.
For reference, VitruvianOS’s development began in 2019, and now, in 2026, this version serves more as a functional foundation than a complete system. But before we go further, a few words about the project itself, since the name is probably unfamiliar to the general public.
VitruvianOS is not a Linux distribution in the usual sense. It uses the Linux kernel only for hardware support, while replacing the standard Linux userland and desktop stack with its own components. Its goal is to combine Linux compatibility with a BeOS-style architecture.
Let me explain. In a typical Linux desktop system, applications run on top of libraries and a display server such as X11 or Wayland. VitruvianOS, however, removes this entire layer. It uses neither X11 nor Wayland. Instead, it implements its own graphics system, input handling, and application runtime.
A key feature is Nexus, an internal communication layer that manages messaging between system components.
The system features native desktop elements modeled after BeOS, including a Deskbar and a Tracker-style file manager. It also offers a compatibility layer to support applications built for Haiku and BeOS APIs.
Moreover, the system uses a Linux kernel with real-time patches. Regarding filesystems, VitruvianOS 0.3 supports XFS and SquashFS, as well as extended attributes.
In the announcement, the developers have also outlined a short-term roadmap. Version 0.3.1 will add missing components and bug fixes based on initial testing. Version 0.3.2 aims to move the system toward self-hosting, enabling VitruvianOS to build itself.
Next, the upcoming 0.4 release will focus on stability and broader hardware support, including ongoing ARM port development. Planned improvements also include enhanced input handling, a complete keymap system, and further user interface refinements.
For more details, see the announcement.
Finally, keep in mind that VitruvianOS 0.3 is an experimental release intended mainly for testing and development.
The second crew member of the F-15E shot down over Iran has been successfully rescued. The Pentagon has disclosed that he is a colonel and was the Weapons Systems Officer on board the aircraft. He suffered 'minor injuries' but is otherwise reported to be in good condition.
A large operation involving more than a dozen aircraft was mounted, and he was recovered from high ground in a mountainous region. The rescue was conducted under fire from Iranian forces. No casualties have been reported.
https://www.siliconrepublic.com/innovation/ul-research-aeronautical-engineering-3d-metals
UL's Dr Kyriakos Kourousis discusses his current research in metal additive manufacturing and the work of the Metal Plasticity and Additive Manufacturing Group at UL.
Dr Kyriakos Kourousis is an associate professor in aeronautical engineering at University of Limerick (UL), as well as director of postgraduate research and education for the university’s Faculty of Science & Engineering. He also leads UL’s Metal Plasticity and Additive Manufacturing Group.
Kourousis joined UL’s School of Engineering 12 years ago, and before his career in academia, he spent more than a decade as an aeronautical engineer in the Hellenic Air Force working on aircraft maintenance, airworthiness and structural integrity – experience that he says now shapes his research and teaching.
At UL, he teaches topics around aircraft systems, the airworthiness of aircraft and the practical engineering behind them.
In terms of his current research, Kourousis says his work focuses on two things: how metals behave when they are loaded in a repeated way, leading to permanent deformation – “what engineers call metal plasticity” – and how to make and trust 3D‑printed metal parts (metal additive manufacturing), “especially for those loading conditions that cause plasticity”.
“In simple terms, we test metals, study their microstructure, build computer models that predict how they’ll perform over time, and use those models to predict how permanent deformation builds up during their operation,” he tells SiliconRepublic.com.
“Localised permanent deformation (plasticity) is the origin of fatigue in metals. My work is both on traditional metals and 3D‑printed ones.”
Here, Kourousis tells us about his work and provides a look into the world of 3D-printed materials and aeronautical engineering.
As 3D‑printed metal parts move from prototypes to real aircraft and machinery, we need to predict their behaviour with confidence. Experimental data and models help engineers design parts that won’t crack or fail early, and help industry and regulators build the evidence needed for certification. In short, better predictions mean safer, lighter, more efficient products.
Also, from a sustainability point of view, the use and reuse of powder in metal additive manufacturing offers an important advantage over other (traditional) manufacturing processes. However, with each reuse cycle, the recycled powder changes in composition and overall ‘quality’, which can have an effect on the produced parts, especially in terms of their plasticity behaviour.
One key finding is how directional 3D‑printed metals can be and what causes this directionality. For example, we showed that changing the build orientation and the post-print heat treatment of steel parts can noticeably change how they stretch and yield. We saw similar effects in 3D-printed titanium, in particular Ti‑6Al‑4V, which is widely used in the aerospace and biomedical industries.
We’ve also found that even lower‑cost metal 3D printing routes (like material‑extrusion/fused filament fabrication) show clear links between print settings and mechanical performance, useful for small/medium companies exploring affordable metal additive manufacturing.
3D‑printed metals aren’t ‘just like’ traditional (wrought) metals. The layer‑by‑layer process creates a directional ‘grain’, so properties change with build direction, clearly shown in our work on steel and titanium. Process signatures matter. Printing can leave tiny pores (lack‑of‑fusion or keyhole) and locked‑in residual stresses; tuning scan strategy and energy helps, but these features still drive plasticity and fatigue if not managed.
An interesting debate I have with colleagues working in materials science is that 3D-printed material may appear to have uniform features at the microscale, but the higher-scale defects caused by melting, solidification and re-melting can lead to a quite non-homogeneous part with differing mechanical properties in different loading directions (mechanical anisotropy).
Post‑processing can close the loop. Ageing/stress‑relief and especially hot isostatic pressing (HIP) homogenise the microstructure and seal pores, boosting ductility and fatigue, though outcomes depend on the as‑built quality and the budget available. A key target for the manufacturing industry is to make 3D printing not only accurate and consistent but also affordable, and we see that there is more work that has to be done there.
The big shift is the coming‑together of accessible metal 3D‑printing equipment with advanced, physics‑based modelling.
At UL, a milestone was obtaining a GE Concept Laser Mlab Cusing R metal 3D printer through a GE Additive award. Unlike other institutions in Ireland, our 3D printer is hosted within an industrial environment, through a collaborative agreement with our partner, Croom Medical. Our students and researchers can test ideas under realistic conditions, while both UL and Croom Medical leverage the advantages of this strategic partnership.
Our research group leads the metal additive manufacturing research activity in UL.
Our work is built around two main strands: metal plasticity modelling, where we turn lab data into reliable models of how metals actually deform; and metal additive manufacturing, where we study and improve metals such as titanium and steel, translating the results into practical build and heat‑treatment guidelines. Current projects and student work span physics‑informed yield prediction for steel 316L, laser powder bed fusion (the most widely used additive manufacturing method for metals) process optimisation, and corrosion-cyclic plasticity topics for aerospace‑grade alloys.
One interesting piece of recent work showed that, by carefully retuning laser power, scan speed and hatch spacing, we can shift from the usual thin‑layer settings to much thicker layers in laser powder bed fusion of aerospace‑grade titanium, while keeping the process stable and parts dense. Led by one of our doctoral researchers who also works with Croom Medical, the study found that those thicker‑layer builds delivered strength and ductility on a par with conventional settings, indicating that productivity can rise without an automatic hit to material performance.
Most importantly, after standard vacuum heat treatment and hot‑isostatic pressing, the parts satisfied the relevant industry standards, pointing to a practical path to higher throughput that still fits certification expectations.
On Wednesday, Apple unveiled new device-level age restrictions in the UK. After downloading a new update, users will now have to confirm that they are 18 or older to access unrestricted features.
Users will be able to confirm their age with a credit card or by scanning an ID.
For those underage or who have not confirmed their age, Apple will turn on Web Content Filter and Communication Safety, which will not only restrict access to certain apps or websites, but will also monitor messages, shared photo albums, AirDrop, and FaceTime calls for nudity.
Apple didn’t specify exactly which services and features are banned for under-18 users, but it will likely be in compliance with UK legislation. Gizmodo reached out to the Cupertino giant for comment, and we’ll update this post when we receive a reply.
The British government does not require Apple and other OS providers to institute device-level age checks, but it does restrict minors' access to online pornography under the Online Safety Act, which passed in 2023. So far, that restriction has only been implemented at the website level, but UK officials have been worried about easy loopholes for evading the age checks, like VPNs.
The broader tech industry has been campaigning for some time to use device-level age checks instead in response to the rising tide of under-16 social media and internet bans around the world.
Last month, in a landmark social media trial in California, Meta CEO Mark Zuckerberg also supported this idea, saying that conducting age verification “at the level of the phone is just a lot clearer than having every single app out there have to do this separately.”
Pornhub operator Aylo has advocated for device-level restrictions in the UK as well, and even sent letters to Apple, Google, and Microsoft in November asking for OS-level age verification. At the time, British authorities responded to Aylo, saying that OS-level restrictions would have to be industry-led, as nothing was stopping these tech companies from implementing the method and showing evidence of its effectiveness.
The most obvious question: Could this be brought stateside?
Many states have already passed legislation restricting the activity of minors on the internet. Apple began working with Texas authorities late last year on the state’s new age restrictions that have since drawn legal backlash. Last month, the company announced that new users in Utah and Louisiana will have their age categories shared with the App Store starting this summer, to ensure compliance with the new age restriction laws in the states.
The regulatory momentum is only growing in the United States, and states are increasingly seeking device-level restrictions. California passed its Digital Age Assurance Act last year; when it goes into effect next year, the law will require users to enter their date of birth when setting up a new phone or computer, enabling OS-level restrictions.
Colorado is also seeking to follow in California’s footsteps. Earlier this year, state legislators introduced a device-level age restriction bill modeled after California’s.
In space no one can hear you scream -- at Microsoft:
Many a frustrated user has sworn they'll launch Microsoft Outlook into space, but NASA has actually done it – on a journey around the Moon, where it's now causing problems for astronauts.
The astronauts aboard the Orion spacecraft currently circling the Earth are taking care of a bunch of housekeeping tasks, including getting their devices working. Judging by some space-to-ground communications with controllers at Houston, it isn't going well.
NASA has helpfully provided a YouTube channel showing live views from the Orion spacecraft, as well as snippets of communication. During this stream, one of the astronauts can be heard first asking for help with network connectivity (IT support staff will be delighted to know that one troubleshooting step involves turning the device off and on) before telling controllers, "I have two Microsoft Outlooks, and neither one of those are working."
Multiple Outlooks is something that is all too familiar to many Windows users. A year ago, the acceptable face of development at Microsoft, Scott Hanselman, parodied the situation by listing some tongue-in-cheek variants to go with Outlook (Classic) and Outlook (New). How about Outlook (Zero Sugar), Outlook (Caffeine Free), and so on? The Orion 'nauts could well be looking at Outlook (Deep Space), Outlook (Low Earth Orbit), or even Outlook (Tentacle Edition).
And, for at least one of the four Artemis II crew members, none of the Outlooks is working.
Even if you go 384,000 km away, you still can't get away from your email.
Update: As of Saturday morning, Artemis 2 is now closer to the Moon than it is to Earth. [JR-04012026-0640utc]
Within hours of launching four astronauts on NASA's Artemis 2 mission around the moon, its crew reported a glitch in what may have been the most anticipated new creature comfort of their Orion spacecraft: their space toilet.
Artemis 2 mission specialist Christina Koch noted an issue starting up part of the Orion capsule's toilet — which NASA calls the Universal Waste Management System — that deals with urine collection.
"The toilet fan is reported to be jammed," NASA spokesperson Gary Jordan said during live mission commentary. "Now the ground teams are coming up with instructions on how to get into the fan and clear that area to revive the toilet for the mission."
Norm Knight, NASA's director of flight operations, told reporters here at the Kennedy Space Center that the malfunction was due to a controller issue on the toilet. But NASA confirmed astronauts could still use the space commode to poop, just not urinate, though engineers were working to restore it to full service.
"In the meantime they're getting their contingency — their backup waste management capabilities specifically for urine," Jordan said. "The fecal collection of the toilet, that specific capability, can still be used with the waste management system aboard Orion."
Artemis 2 mission specialist Christina Koch (right) works with a test version of the Orion space toilet. (Credit: NASA)
A few hours after Koch reported the toilet issue to Mission Control, flight controllers walked her through a series of steps to try to fix it. "Houston, Integrity, good checkout," Koch said after trying the fix.
Then, some relieving news.
"Happy to report that toilet is go for use," Mission Control's Capcom Amy Dill radioed Koch. "We do recommend letting the system get to operating speed before donating fluid, and then letting it run a little bit after donation."
"We are cheers all around, and we will do that," Koch replied.
It does sound like at least one crewmember used a contingency bag before the fix. Koch reported that one CCU, or Collapsible Contingency Urinal, was full and needed to be emptied overboard. Dill radioed up instructions on the best time for that dump, and all was well.
That may be a relief for the Artemis 2 astronauts, in more ways than one. NASA's Apollo astronauts did not have the luxury of a toilet when they flew to the moon in the 1960s and 1970s. They peed and pooped in plastic bags, then stowed the solid waste and vented urine overboard into space.
The toilet aboard Orion is a smaller, more compact version of the bathrooms on the International Space Station. It's built into the floor of the Orion capsule and allows Artemis 2 astronauts some privacy while taking care of business. While the Orion spacecraft is larger than NASA's Apollo capsules, it's still cramped — the interior has been compared to that of two SUVs.
The global shortage of solid-state memory has claimed its first photographic victim, as Sony has announced that it is suspending fulfillment of all orders for nearly its entire SD and CFexpress memory card product lines.
Sony Japan published the notice on its website today:
Thank you for your continued patronage of Sony products.
Due to the global shortage of semiconductors (memory) and other factors, it is anticipated that supply will not be able to meet demand for CFexpress memory cards and SD memory cards for the foreseeable future. Therefore, we have decided to temporarily suspend the acceptance of orders from our authorized dealers and from customers at the Sony Store from March 27, 2026 onwards.
Regarding the resumption of order acceptance, we will consider it while monitoring the supply situation and will announce it separately on the product information page.
GitHub will now use developer data to train its AI models by default:
GitHub has confirmed it will begin using developer interaction data to train its artificial intelligence models, marking a significant shift in how user data is handled across its platform.
The move, set to take effect on April 24, introduces an opt-out system, meaning most users will be automatically enrolled unless they explicitly disable the setting.
The Microsoft-owned platform said it will start collecting and using interaction data from its AI coding assistant, GitHub Copilot, to improve model performance.
This includes:
- Code snippets entered by users
- Prompts and inputs
- AI-generated outputs and edits
- Context such as file structure and repository data
- User feedback like ratings and interactions
GitHub says this data will help build "more intelligent, context-aware" coding tools and improve accuracy across different programming languages and workflows.
[...] Users who do not want their data used for training must manually disable the setting in their account preferences.
However, enterprise-focused tiers including Copilot Business and Enterprise are excluded from the change, reflecting stricter data governance expectations in corporate environments.
GitHub says real-world developer interactions are essential to improving AI systems.
The Vatican has published a document examining the future of humanity. It is now available in several languages, including English, along with an English summary. Issues like AI, LLMs, transhumanism, posthumanism, social control media, and digital technology in general are raised in 164 points.
1. The method of the document on the sixtieth anniversary of Gaudium et spes [...]
6. Reason enlightened by faith must establish a critical comparison between new anthropological horizons and the perennial needs of the human condition: ‘Discernment must carefully distinguish between elements compatible with the Gospel and those contrary to it, between positive contributions and ideological aspects, but the more acute understanding of the world that results cannot fail to prompt a more penetrating appreciation of Christ the Lord and of the Gospel, since Christ is the Saviour of the world.’[5]
7. This discernment is inspired by the sixtieth anniversary of the Pastoral Constitution Gaudium et spes (1965-2025), an anniversary that points the present document towards a new reflection linked to the personal and social anthropology proposed in the Constitution and in the subsequent Magisterium that has received and developed its teaching. The unique nature of Gaudium et spes must be emphasised, a conciliar Constitution with specific magisterial value, expressed in its commitment to consider carefully the condition of humanity in today’s world. For the first time in history, a document of this level systematically proposed a vision of the human being illumined by the mystery of Christ. In its wake, therefore, we have the question of re-proposing Christian anthropology today in an open and critical dialogue with the more recent questions coming from human experience and cultures. Precisely in reference to Gaudium et spes, the document places at its centre the human being, ‘whole and entire, body and soul, heart and conscience, mind and will’,[6] in order to promote that ‘integral and solidary humanism capable of creating a new social, economic and political order, founded on the dignity and freedom of every human person, to be brought about in peace, justice and solidarity.’[7] [...]
[...] 2. The challenge of the poor
164. The relentless technological development that we consider in this text, which favours above all those who already have much power, challenges us to turn our gaze to the poorest. If this development, together with the ideologies that accompany it, involves serious risks, as we have seen, these will be even greater for the weakest and most defenceless, that is, for those who count for nothing because they are of no use to the workings of the more powerful. They run the risk of becoming waste material, ‘collateral damage’, swept away without mercy. As Christians, however, we are called to see them through the eyes of Christ, who says to each of them: ‘I have loved you.’ (Rev 3:9) As Pope Leo XIV explains, Christ ‘by his love given to the end, shows the dignity of every human being.’[199] This encourages us to ‘perceive the strong connection that exists between Christ’s love and his call to be close to the poor.’[200] From this arises the duty to be particularly attentive—as humble sentinels—to the consequences that new developments in society may have on the lives of the least among us. We must respond with a prophetic word and with generous involvement. The authenticity of our faith and the human value of our lives are at stake.
Previously:
(2015) Pope Francis to Issue Encyclical on Global Warming
(2014) Vatican Hosts Conference On Alien Life in Universe
Scientists Just Spotted a Black Hole Collision That Defies All Odds:
An international team of astronomers has detected an extraordinary cosmic event that could redefine our understanding of black hole mergers. For the first time, a binary black hole merger, observed in November 2024, has been linked with a short gamma-ray burst (GRB), a phenomenon that was previously thought impossible. This unprecedented event, detailed in The Astrophysical Journal, could open a new frontier in multi-messenger astronomy, combining the "sound" of gravitational waves with the "flash" of high-energy light.
In November 2024, the LIGO-Virgo-KAGRA observatories captured a signal from an immense gravitational wave event, identified as S241125n. What made this discovery particularly extraordinary was the detection of a gamma-ray burst (GRB) just 11 seconds later. Gamma-ray bursts, known for their intense energy and brief duration, are typically associated with neutron star mergers, not black hole mergers. For a long time, scientists believed that black hole mergers would remain invisible to traditional telescopes. This new finding upends that assumption, suggesting that under the right conditions, even the darkest of cosmic collisions can emit visible radiation.
"This estimate is deliberately conservative, and the true probability of a chance alignment may be even lower," said the research team. "However, in the interest of scientific rigor, we cannot yet draw a definitive conclusion. Regardless, this is clearly a very intriguing event."
The findings suggest that the correlation between gravitational waves and a gamma-ray burst is not merely coincidental but a rare, albeit possible, occurrence.
The study, published in The Astrophysical Journal, presents compelling evidence that S241125n is a multi-messenger event that bridges gravitational waves and electromagnetic radiation, specifically gamma rays and X-rays. Gravitational waves, detected by the observatories, are ripples in spacetime caused by the violent collision of massive objects like black holes. In this case, scientists recorded the waves from a black hole merger about 4.2 billion light-years away, an astonishing distance that places the event in the early universe.
Just after the gravitational-wave signal, NASA's Swift satellite detected a short GRB, followed by an X-ray afterglow from China's Einstein Probe. These electromagnetic signals were pinpointed to the same region of the sky, making it highly improbable that they were unrelated. Such an alignment, researchers assert, could occur only once in several decades.
One of the most striking aspects of S241125n is the extreme mass of the black holes involved. The study suggests that the two black holes involved in the merger each had a mass more than 100 times that of our Sun. This is significantly larger than most previous black hole mergers detected by LIGO, which typically involve black holes with masses in the tens of solar masses. These unusually massive black holes raise intriguing questions about their origins, suggesting they might have formed through previous mergers or exotic formation processes.
The discovery challenges existing theories of black hole formation and suggests that such heavy black holes can exist in distant regions of the universe. The large mass of the merging black holes implies that these events could be observed across vast cosmic distances, opening up new possibilities for understanding the history and evolution of black holes and their environments.
The study also presents an innovative explanation for how a black hole merger could produce a short gamma-ray burst. According to the team's model, the two black holes may have merged within the dense disk of gas and dust surrounding a galaxy's central supermassive black hole, an environment known as an active galactic nucleus (AGN). In this fuel-rich region, the merger triggered a process in which the newly formed black hole received a powerful "kick," propelling it through the surrounding material.
As the black hole moved through the gas, it rapidly accreted matter at a rate that far exceeded the typical limit for black hole growth. This intense accretion likely created powerful relativistic jets of radiation and particles, which then interacted with the dense gas, generating shockwaves. These shockwaves heated the surrounding material, eventually causing it to release high-energy photons, the burst of gamma rays observed by Swift.
If the association between the gravitational waves and gamma-ray burst is confirmed, it would mark a milestone in the field of multi-messenger astronomy, a new area of research that combines different types of cosmic signals to gain a deeper understanding of the universe. Until now, black hole mergers had only been detected through gravitational waves, offering a limited view of these cosmic events. With the potential confirmation of a gamma-ray counterpart, scientists could begin to study these mergers not just through sound but through light, expanding the tools available for investigating the most violent events in the universe.
This discovery also suggests that gravitational-wave events could be used as "standard sirens" for measuring cosmic distances. With the gamma-ray burst acting as a marker of the merger's host galaxy, scientists could refine their understanding of cosmic expansion, providing a more accurate measure of the universe's growth.
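To sketch the idea: the amplitude of the gravitational-wave signal encodes the merger's luminosity distance d_L directly, with no cosmic distance ladder required, while an electromagnetic counterpart identifies the host galaxy and hence its redshift z. In the low-redshift limit the two combine into an estimate of the Hubble constant (a simplified relation; a full analysis marginalizes over many more parameters):

    H_0 \approx \frac{c \, z}{d_L}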
Journal Reference:
Shu-Rui Zhang, Yu Wang, Ye-Fei Yuan, et al. LVK S241125n: Massive Binary Black Hole Merger Produces Gamma Ray Burst in Active Galactic Nucleus Disk [open], The Astrophysical Journal (DOI: 10.3847/1538-4357/ae3319)
Most of us search Google the same way we always have: type a few words, scroll, click something that looks close enough, and hope. For a while, that worked. Google handed us a list of links and let us take it from there.
What's happening now is something different. A 2024 study by SparkToro found that nearly 60% of Google searches end without anyone clicking through to a website, and the trend has accelerated since. By February 2026, Ahrefs found that queries triggering AI Overviews now see a 58% reduction in clicks. Google has been systematically inserting itself between you and the original source, answering questions with AI-generated summaries before you ever reach the page those answers came from. The results you do see are filtered through an algorithm that weighs your search history, your location, and the billions of dollars advertisers have spent to appear for particular queries. Two people searching identical phrases on the same day can get meaningfully different results without either of them knowing it. And because Google controls roughly 90% of the world's search traffic, most people have no frame of reference for what a less mediated search experience would even look like.
The search bar replaced the reference desk without replacing the skills behind it: knowing how to ask a question precisely, understanding how information is organized and who funds it, knowing the difference between a primary source and a summary of one. The assumption was that the technology made all of that unnecessary, which suited Google; a user who can't navigate information independently is a user who keeps coming back to be guided.
The search bar you already have is more capable than that arrangement lets on. With the right syntax, it becomes a precision instrument: narrow by domain, by date, by file type, by exact phrase. You can pull up archived pages, surface open file directories, and even find what people said in forums instead of what brands want you to find. None of it requires a new tool or a paid account. The capability has been there the whole time.
Google is constantly interpreting you. It swaps in synonyms, personalizes results based on your history, and decides what you probably meant rather than returning what you typed. Most of the time that interpretation is invisible. These tools are how you override it.
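For anyone who wants the concrete syntax the article is gesturing at, a few standard, long-documented operators:

- "exact phrase": quotation marks force a verbatim match and suppress synonym swapping
- site:example.gov emissions data: restrict results to a single domain
- filetype:pdf annual report: return only documents of a given type
- intitle:"index of" backups: the classic query for surfacing open file directories
- after:2015-01-01 before:2020-01-01: bound results by date
- noise -word: a leading minus excludes a term; conversely, appending site:reddit.com is a common way to find forum posts rather than brand copy

For archived pages, the Wayback Machine at web.archive.org is the dependable route now that Google has retired its cache: operator.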
Anybody have any tips or pointers to add to this?
https://gizmodo.com/attorney-hit-with-historic-fine-for-citing-ai-generated-cases-2000738651
A court in Oregon has issued a fine of $10,000 to an attorney who submitted a legal brief with citations and quotes hallucinated by AI, according to a new report from the Oregonian. It’s the highest fine yet for citing fake cases in the state and would have been higher, but the judges offered some leniency, according to the newspaper.
The attorney, identified by the Oregonian as Bill Ghiorso in Salem, submitted a legal brief to the Oregon Court of Appeals that contained 15 fake citations and nine fake quotes. Ghiorso reportedly blamed a paralegal for the AI hallucinations and initially challenged the fine.
The appeals court in Oregon first fined a different attorney for the practice back in December 2025. The three-judge panel established that this kind of issue should be met with $500 for each fake citation and $1,000 for each false quotation or statement of law. Adding up all the hallucinations, Ghiorso was first hit with a $16,500 fine, but the judges capped that at $10,000.
https://www.engadget.com/ai/wikipedia-has-banned-ai-generated-articles-173641377.html?src=rss
English Wikipedia has banned the use of generative AI when writing or rewriting articles. The platform says it came to this decision because using AI to whip up copy "often violates several of Wikipedia's core content policies."
There are a couple of minor exceptions. Editors can use large language models (LLMs) to refine their own writing, but only if the copy is checked for accuracy. The policy states that this is because LLMs "can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited."
Editors can also use LLMs to assist with language translation. However, they must be fluent enough in both languages to catch errors. Once again, the information must be checked for inaccuracies.
https://phys.org/news/2026-03-ancient-alphabets-insights-uncover-hidden.html
With artificial intelligence (AI) as an essential tool, San Diego State University researchers have discovered surprising similarities among ancient writing systems from Africa and the Caucasus region of Eurasia. Their study suggests that the Armenian alphabet may be more closely related in structure to the ancient Ethiopic writing system than linguists and historians previously thought. The paper is published in the journal Digital Scholarship in the Humanities.
For many years, historians have noticed that some Armenian, Georgian and Caucasian Albanian letters look similar to letters from Ethiopic, also known as Ge'ez, a writing system developed in the Horn of Africa more than 1,600 years ago.
Most of these early studies, however, relied on scholars' own visual inspection of the letters to determine whether they appeared alike.
Researchers from the Department of Mechanical Engineering in the College of Engineering tested this idea using AI instead of human judgment. They trained a computer program to study more than 28,000 images of Ethiopic characters so it could learn the basic shapes and patterns in the writing system. The program learned to recognize curves, straight lines, angles and the overall structure of each letter.
Importantly, the computer had no data on history, religion, geography or culture. It only looked at shapes. After learning the Ethiopic characters, the program compared them to letters from the Armenian, Georgian and Caucasian Albanian alphabets. It then calculated how similar the shapes were.
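The paper's exact architecture isn't described in this summary, but the general technique, learning a shape embedding for each character and then comparing characters by vector distance, can be sketched briefly. A hedged illustration in Python with NumPy: the embeddings below are random placeholders standing in for the output of a network trained on the ~28,000 Ethiopic character images, and the letters are just examples.

    import numpy as np

    def cosine_similarity(a, b):
        """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Placeholder embeddings. In the study, vectors like these would come from
    # a model trained on thousands of Ethiopic character images; here they are random.
    rng = np.random.default_rng(0)
    ethiopic = {ch: rng.normal(size=128) for ch in ["ሀ", "ለ", "መ"]}
    armenian = {ch: rng.normal(size=128) for ch in ["ա", "բ", "գ"]}

    # Score every cross-script pair; high cosine similarity would flag
    # structurally similar letters, with no historical metadata involved.
    for e_ch, e_vec in ethiopic.items():
        best_ch, best_vec = max(armenian.items(),
                                key=lambda item: cosine_similarity(e_vec, item[1]))
        print(e_ch, "most resembles", best_ch,
              "(similarity %.2f)" % cosine_similarity(e_vec, best_vec))

With random vectors the printed scores are meaningless; the point is the comparison machinery, which, as the article stresses, sees only shape and nothing of history, religion or geography.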
Daniel Zemene et al, Machine learning techniques for exploring influence, commonalities, and shared origin of scripts: cases of Ethiopic, Armenian, Georgian, and Caucasian Albanian scripts, Digital Scholarship in the Humanities (2026). DOI: 10.1093/llc/fqag029